
    Gestion adaptative de l'Ă©nergie pour les infrastructures de type grappe ou nuage

    In a context where heterogeneous resources are used, performance remains the traditional criterion for capacity planning. Today, however, taking energy into account has become a necessity. This article addresses the problem of energy efficiency for load balancing in distributed systems. We propose energy-efficient resource management through the addition of features that handle energy-related events according to user-defined rules. We implement these features in the DIET middleware, which manages load balancing, in order to highlight the cost of the trade-offs between performance and energy consumption. Our solution and its benefits are validated through experiments that evaluate performance and power consumption under three competing scheduling policies. We highlight the energy gains obtained while trying to minimize the loss in performance. We also give the middleware responsible for scheduling the ability to react to variations in energy availability.

    Energy-Aware Server Provisioning by Introducing Middleware-Level Dynamic Green Scheduling

    Several approaches to reducing the power consumption of datacenters have been described in the literature, most of which aim to improve energy efficiency by trading performance for lower power consumption. However, these approaches do not always give administrators and users the means to specify how they want to explore such trade-offs. This work provides techniques for assigning jobs to distributed resources, exploring energy-efficient resource provisioning. We use middleware-level mechanisms to adapt resource allocation according to energy-related events and user-defined rules. The proposed framework enables developers, users, and system administrators to specify and explore energy-efficiency and performance trade-offs without detailed knowledge of the underlying hardware platform. Evaluation of the proposed solution under three scheduling policies shows gains of 25% in energy efficiency with minimal impact on overall application performance. We also evaluate the reactivity of the adaptive resource provisioning.
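
    The rule-driven, energy-aware scheduling described above (and in the French abstract preceding it) lends itself to a small illustration. The sketch below is not the DIET middleware API; the Node and EnergyRule types, the scoring function, and every numeric value are assumptions made purely for illustration.

    # Hypothetical sketch of rule-driven, energy-aware node selection.
    # Not the DIET middleware API; all names and thresholds are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        perf_score: float       # higher is faster (e.g. normalized throughput)
        power_watts: float      # estimated draw if the job runs here

    @dataclass
    class EnergyRule:
        # User-defined weight: 0.0 = pure performance, larger = favour energy saving.
        energy_weight: float
        max_power_watts: float  # hard cap a node must respect

    def pick_node(nodes, rule):
        """Return the node with the best weighted performance/energy score."""
        candidates = [n for n in nodes if n.power_watts <= rule.max_power_watts]
        if not candidates:
            return None  # no node satisfies the user's energy rule
        def score(n):
            return n.perf_score - rule.energy_weight * n.power_watts
        return max(candidates, key=score)

    nodes = [Node("fast-hot", 10.0, 300.0), Node("slow-cool", 6.0, 120.0)]
    chosen = pick_node(nodes, EnergyRule(energy_weight=0.05, max_power_watts=250.0))
    print(chosen.name if chosen else "no node fits the rule")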

    Nu@ge: Towards a solidary and responsible cloud computing service

    Best Paper Award. The adoption of cloud computing is still limited by several legal concerns on the part of companies. One of these concerns is data sovereignty: data can be physically hosted in sensitive locations, resulting in a loss of control for companies. In this paper, we present the Nu@ge project, which aims at building a federation of container-sized datacenters on French territory. Nu@ge provides a software stack that enables companies to put independent datacenters into cooperation in a national mesh. Additionally, a prototype of a container-sized datacenter has been validated and patented.

    Parallel Differential Evolution approach for Cloud workflow placements under simultaneous optimization of multiple objectives

    The recent rapid expansion of Cloud computing facilities presents providers and users with the challenge of placing workflows optimally on distributed resources, under the often contradictory goals of minimizing makespan, energy consumption, and other metrics. Evolutionary optimization techniques, which are guaranteed by theoretical principles to provide globally optimum solutions, are among the most powerful tools to achieve such optimal placements. Multi-objective evolutionary algorithms by design work on contradictory objectives, gradually evolving across generations towards a converged Pareto front of optimal decision variables, in this case the mapping of tasks to resources on clusters. However, the computation time such algorithms take to converge makes them prohibitive for real-time placement because of the adverse impact on makespan. This work describes the parallelization, on the same cluster, of a multi-objective Differential Evolution method (NSDE-2) for optimizing workflow placement, and the attendant speedups that bring the implicit accuracy of the method into the realm of practical utility. Experimental validation is performed on a real-life testbed using diverse Cloud traces. The solutions under different scheduling policies demonstrate a significant reduction in energy consumption along with some improvement in makespan.
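
    A minimal, self-contained sketch of the kind of multi-objective differential evolution the abstract refers to, with the independent fitness evaluations farmed out in parallel. It is not NSDE-2 itself: the placement encoding, the mutation and crossover rates, and the simplified makespan and energy models are all assumptions.

    # Illustrative multi-objective differential evolution step for task placement.
    # NSDE-2 is not reproduced here; operators, rates and the two cost models
    # (makespan, energy) are simplified assumptions.
    import random
    from concurrent.futures import ProcessPoolExecutor

    N_TASKS, N_NODES, POP, F, CR = 20, 4, 30, 0.5, 0.9
    _rng = random.Random(42)                                   # fixed seed so workers agree
    TASK_LEN = [_rng.uniform(1, 10) for _ in range(N_TASKS)]   # work per task
    NODE_SPEED = [1.0, 1.5, 2.0, 2.5]                          # relative node speed
    NODE_POWER = [80.0, 120.0, 170.0, 230.0]                   # watts when busy

    def decode(vec):
        return [min(int(x), N_NODES - 1) for x in vec]

    def evaluate(vec):
        """Return (makespan, energy) for one task-to-node placement vector."""
        busy = [0.0] * N_NODES
        for task, node in enumerate(decode(vec)):
            busy[node] += TASK_LEN[task] / NODE_SPEED[node]
        makespan = max(busy)
        energy = sum(t * p for t, p in zip(busy, NODE_POWER))
        return makespan, energy

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def de_generation(pop, fit, executor):
        trials = []
        for i in range(POP):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(N_TASKS)
            trials.append([
                min(max(a[j] + F * (b[j] - c[j]), 0.0), N_NODES - 1e-9)
                if (random.random() < CR or j == j_rand) else pop[i][j]
                for j in range(N_TASKS)
            ])
        # Evaluations are independent, so they can be run in parallel
        # (local processes here; on a cluster, across worker nodes).
        trial_fit = list(executor.map(evaluate, trials))
        for i in range(POP):
            if dominates(trial_fit[i], fit[i]):
                pop[i], fit[i] = trials[i], trial_fit[i]
        return pop, fit

    if __name__ == "__main__":
        pop = [[random.uniform(0, N_NODES) for _ in range(N_TASKS)] for _ in range(POP)]
        with ProcessPoolExecutor() as ex:
            fit = list(ex.map(evaluate, pop))
            for _ in range(50):
                pop, fit = de_generation(pop, fit, ex)
        front = [f for f in fit if not any(dominates(g, f) for g in fit)]
        print("non-dominated (makespan, energy):", front)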

    Ensemble-based network edge processing

    Estimating energy costs for an industrial process can be computationally intensive and time consuming, especially as it can involve data collection from different (distributed) monitoring sensors. Industrial processes have an implicit complexity involving the use of multiple appliances (devices/sub-systems) tied to operation schedules, electrical capacity, and optimisation setpoints, which need to be determined to achieve operational cost objectives. Addressing the complexity associated with an industrial workflow (i.e. the range and type of tasks) leads to increased requirements on the computing infrastructure. Such requirements can include achieving execution performance targets per processing unit within a particular size of infrastructure, i.e. the processing and data storage nodes needed to complete a computational analysis task within a specific deadline. Ensemble-based edge processing is identified as a way to meet these Quality of Service targets, whereby edge nodes are used to distribute the computational load across a distributed infrastructure. Rather than relying on a single edge node, we propose the combined use of an ensemble of such nodes to overcome processing, data privacy/security, and reliability constraints. We propose an ensemble-based network processing model to facilitate the distributed execution of energy simulation tasks within an industrial process. A scenario based on energy profiling within a fisheries plant is used to illustrate the use of an edge ensemble. The suggested approach is, however, general in scope and can be used in other, similar application domains.
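
    A minimal sketch of how an ensemble of edge nodes might be used to dispatch energy simulation tasks with some tolerance to node failure. It is not the paper's framework; the EdgeNode attributes, the least-loaded heuristic, and the replication factor are illustrative assumptions.

    # Hypothetical dispatcher for an ensemble of edge nodes: each simulation task
    # is replicated on the least-loaded nodes so results survive a node failure.
    import heapq

    class EdgeNode:
        def __init__(self, name, capacity):
            self.name, self.capacity, self.load = name, capacity, 0.0

    def dispatch(tasks, ensemble, replicas=2):
        """Assign each (task_id, cost) to `replicas` distinct least-loaded nodes."""
        plan = {}
        heap = [(n.load / n.capacity, i) for i, n in enumerate(ensemble)]
        heapq.heapify(heap)   # min-heap keyed by relative load
        for task_id, cost in tasks:
            chosen = []
            for _ in range(min(replicas, len(ensemble))):
                _, i = heapq.heappop(heap)
                ensemble[i].load += cost
                chosen.append(i)
            for i in chosen:
                heapq.heappush(heap, (ensemble[i].load / ensemble[i].capacity, i))
            plan[task_id] = [ensemble[i].name for i in chosen]
        return plan

    ensemble = [EdgeNode("edge-a", 4.0), EdgeNode("edge-b", 2.0), EdgeNode("edge-c", 4.0)]
    tasks = [("profile-boiler", 3.0), ("profile-freezer", 1.5), ("profile-line-2", 2.0)]
    print(dispatch(tasks, ensemble))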

    MDSC: Modelling Distributed Stream Processing across the Edge-to-Cloud Continuum

    The growth of the Internet of Things is resulting in an explosion of data volumes at the Edge of the Internet. To reduce the costs incurred by data movement and centralized cloud-based processing, it is becoming increasingly important to process and analyze such data closer to the data sources. Exploiting Edge computing capabilities for stream-based processing is, however, challenging. It requires addressing the complex characteristics and constraints imposed by all the resources along the data path, as well as the large set of heterogeneous data processing and management frameworks. Consequently, the community needs tools that can facilitate the modeling of this complexity and can integrate the various components involved. In this work, we introduce MDSC, a hierarchical approach for modeling distributed stream-based applications on Edge-to-Cloud continuum infrastructures. We demonstrate how MDSC can be applied to a concrete real-life ML-based application, early earthquake warning, to help answer questions such as when it is worth decentralizing the classification load from the Cloud to the Edge, and how.
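
    A toy model of the Cloud-versus-Edge question raised for the earthquake early-warning use case: classification stays in the Cloud while the shared uplink can carry the raw events, and moves to the Edge once it saturates. This is not MDSC itself; every constant below is an illustrative assumption.

    # Toy decision model: when does it pay to move classification from the
    # Cloud to the Edge?  All numbers are illustrative assumptions.
    CLOUD_INFER_MS = 5.0          # fast model on cloud hardware
    CLOUD_KB_PER_EVENT = 50.0     # raw waveform shipped per event
    UPLINK_KBPS = 20_000.0        # shared uplink from all edge sensors
    EDGE_INFER_MS = 80.0          # smaller model on constrained edge hardware

    def cloud_latency_ms(aggregate_events_per_s):
        needed_kbps = aggregate_events_per_s * CLOUD_KB_PER_EVENT * 8
        if needed_kbps > UPLINK_KBPS:
            return float("inf")   # the shared uplink saturates
        transfer_ms = CLOUD_KB_PER_EVENT * 8 / UPLINK_KBPS * 1000
        return transfer_ms + CLOUD_INFER_MS

    def best_placement(aggregate_events_per_s):
        return "cloud" if cloud_latency_ms(aggregate_events_per_s) < EDGE_INFER_MS else "edge"

    for rate in (10, 40, 60):
        print(f"{rate:>3} events/s across all sensors -> classify at the {best_placement(rate)}")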

    Autonomics at the edge: resource orchestration for edge native applications

    With the increasing availability of edge computing resources, there is a need to develop edge orchestration and resource management techniques that support application resilience and performance. As with the use of containers and microservices in cloud environments, it is important to understand the key attributes that characterise “edge native” applications. As edge devices increase in autonomy and intelligence, orchestration techniques are needed to respond to changes in device properties, availability, security credentials, migration, and network connectivity protocols. Implementing autonomics techniques for edge computing can increase the resilience of the interaction between devices and applications, reducing execution time and cost. The use of autonomics at the network edge can address the complexity of industrial workflows and overcome execution latency, data privacy, and reliability constraints.
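
    One common way to structure such autonomic orchestration is a monitor-analyse-plan-execute loop. The sketch below shows a single iteration of such a loop; the device model, the health rules, and the migration action are placeholders rather than anything taken from the text above.

    # Minimal MAPE-style control pass for edge orchestration; all names,
    # rules and actions are placeholders, not the paper's framework.
    def monitor(devices):
        """Collect current facts about each edge device (stubbed here)."""
        return {d["name"]: {"reachable": d["reachable"], "load": d["load"]} for d in devices}

    def analyse(facts):
        """Flag devices that violate simple health rules."""
        return [name for name, f in facts.items() if not f["reachable"] or f["load"] > 0.9]

    def plan(unhealthy, placements):
        """Decide to migrate workloads away from unhealthy devices."""
        return [(svc, dev) for svc, dev in placements.items() if dev in unhealthy]

    def execute(migrations, placements, healthy_pool):
        for svc, old_dev in migrations:
            target = healthy_pool[0] if healthy_pool else None
            if target:
                placements[svc] = target
                print(f"migrating {svc}: {old_dev} -> {target}")

    devices = [{"name": "cam-1", "reachable": True, "load": 0.95},
               {"name": "gw-2", "reachable": True, "load": 0.30}]
    placements = {"detector": "cam-1"}
    facts = monitor(devices)
    unhealthy = analyse(facts)
    healthy = [n for n in facts if n not in unhealthy]
    execute(plan(unhealthy, placements), placements, healthy)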

    Edge enhanced deep learning system for large-scale video stream analytics

    Applying deep learning models to large-scale IoT data is a compute-intensive task that needs significant computational resources. Existing approaches transfer this big data from IoT devices to a central cloud, where inference is performed using a machine learning model. However, the network connecting the data capture source and the cloud platform can become a bottleneck. We address this problem by distributing the deep learning pipeline across edge and cloudlet/fog resources. The basic processing stages and trained models are distributed towards the edge of the network and onto in-transit and cloud resources. The proposed approach performs initial processing of the data close to the data source at edge and fog nodes, resulting in a significant reduction of the data that is transferred to and stored in the cloud. Results on an object recognition scenario show a 71% efficiency gain in the throughput of the system when a combination of edge, in-transit and cloud resources is employed, compared to a cloud-only approach.
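
    A minimal sketch of the pipeline split the abstract describes: a cheap filtering stage next to the data source, full inference further upstream, and the data reduction measured at the boundary. The filter, the placeholder model call, and the frame size are assumptions, not the paper's actual system.

    # Sketch of an edge/cloud pipeline split: filter frames at the edge,
    # send only the interesting ones upstream for full inference.
    import random

    FRAME_KB = 120.0   # assumed size of one video frame

    def edge_stage(frame):
        """Cheap test run next to the camera; keep only 'interesting' frames."""
        return frame["motion_score"] > 0.5   # placeholder for a lightweight detector

    def cloud_stage(frame):
        """Full object-recognition model, run on cloudlet/fog or cloud resources."""
        return {"frame": frame["id"], "label": "object"}   # placeholder inference

    frames = [{"id": i, "motion_score": random.random()} for i in range(1000)]
    forwarded = [f for f in frames if edge_stage(f)]
    results = [cloud_stage(f) for f in forwarded]

    saved_kb = (len(frames) - len(forwarded)) * FRAME_KB
    print(f"forwarded {len(forwarded)}/{len(frames)} frames; "
          f"avoided shipping ~{saved_kb:.0f} KB to the cloud")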